People depend on different sources of information and news to stay informed in everyday life. They often do not have time to read through an entire news article, yet they want to grasp all of its important points. Hence there is a need to condense lengthy text into concise, understandable summaries that minimize reading time.
To ease this news abstraction process, traditional Natural Language Processing approaches exploit statistical information to extract important text snippets, then compile and present them to the user as a summary. This form of extractive summarization often fails to compress lengthy, detailed text well; it merely picks up key words or phrases from the original text. There is therefore still a need for an efficient abstractive summarization approach that can condense a news article and paraphrase its content into an understandable, grammatically correct summary.
Recently, abstractive summarization has been treated as a natural language generation problem and solved using sequential deep learning approaches, where sequence-to-sequence learning is accomplished by training on large text datasets. A few RNN-based approaches have been used to generate headlines for a news story. However, summarizing the paragraphs of a news story is considered more complex than headline generation.
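To make the sequential modeling idea concrete, the sketch below shows a single step of a GRU cell, the recurrent unit mentioned later in this section. It is purely illustrative and not part of the project code: the weights are hypothetical scalars (real models use weight matrices over embedding vectors), and the function names are invented for this example.

```python
import math

def sigmoid(x):
    """Logistic function used by the GRU gates."""
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h_prev, W):
    """One GRU step for scalar input x and scalar hidden state h_prev.

    W is a dict of hypothetical scalar weights; in practice these are
    learned matrices applied to word-embedding vectors.
    """
    # Update gate z: how much of the new candidate state to take.
    z = sigmoid(W["Wz"] * x + W["Uz"] * h_prev + W["bz"])
    # Reset gate r: how much of the previous state to expose to the candidate.
    r = sigmoid(W["Wr"] * x + W["Ur"] * h_prev + W["br"])
    # Candidate hidden state, computed from the input and the reset state.
    h_tilde = math.tanh(W["Wh"] * x + W["Uh"] * (r * h_prev) + W["bh"])
    # Interpolate between the old state and the candidate.
    return (1.0 - z) * h_prev + z * h_tilde

# Encoding a "sentence" (here, a toy sequence of scalars) is just a fold:
W = {"Wz": 0.5, "Uz": 0.5, "bz": 0.0,
     "Wr": 0.5, "Ur": 0.5, "br": 0.0,
     "Wh": 0.5, "Uh": 0.5, "bh": 0.0}
h = 0.0
for token in [1.0, -1.0, 0.5]:
    h = gru_step(token, h, W)
```

Because the candidate state passes through tanh and the update gate interpolates convexly, the hidden state stays bounded in (-1, 1), which is part of what makes gated units easier to train on long sequences than plain RNNs.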
In this project, our primary scope is to develop a deep learning solution by experimenting with different generative sequential models, such as LSTMs, GRUs, bidirectional RNNs, and encoder-decoder networks with and without an attention mechanism, on a large-scale news dataset. This effort aims to train the generative deep network and a new language model (or to reuse existing models via transfer learning) for this purpose.
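As a sketch of the attention mechanism named above, the snippet below computes simple dot-product attention: the decoder's query is scored against each encoder hidden state, the scores are normalized with a softmax, and the context vector is the resulting weighted sum. This is an illustrative toy with invented names and tiny hand-written vectors, not the project's implementation, and it shows only one of several scoring functions used in encoder-decoder models.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot_attention(query, keys, values):
    """Dot-product attention for one decoder step.

    query:  decoder hidden state (list of floats)
    keys:   encoder hidden states used for scoring
    values: encoder hidden states to be averaged (often same as keys)
    Returns the context vector and the attention weights.
    """
    # Score each encoder state by its dot product with the query.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    # Normalize scores into a probability distribution over source positions.
    weights = softmax(scores)
    # Context vector: attention-weighted sum of the encoder values.
    dim = len(values[0])
    context = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return context, weights

# Toy usage: the query aligns with the first encoder state, so that
# position receives the larger attention weight.
context, weights = dot_attention([1.0, 0.0],
                                 [[1.0, 0.0], [0.0, 1.0]],
                                 [[1.0, 0.0], [0.0, 1.0]])
```

At each decoding step the context vector gives the decoder a focused view of the relevant source words, which is why attention-equipped encoder-decoder networks tend to summarize long inputs better than a fixed-length bottleneck.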